Maliheh Sarviyeh; Hossein Akbari; Fakhrosaadat Mirhosseini; Hamidreza Baradaran; Afsaneh Hemati; Jalil Kohpayezadeh
Volume 21, Issue 1 , March and April 2015, , Pages 145-154
Abstract
Background: Direct observation is a method for the objective assessment of practical skills and for giving feedback to students. This study investigated the reliability and validity of the direct observation of procedural skills (DOPS) test in the assessment of midwifery students' clinical skills.
Material and Method: In this cross-sectional study, participants were 44 midwifery internship students of Kashan University of Medical Sciences, selected through a census sampling method. Based on the opinions of faculty members of Guilan, Kashan and Zahedan universities, four midwifery skills were chosen from among the basic clinical skills and a related checklist was prepared. Students were observed performing each procedure in a real work environment by the examiner, who recorded the results on the checklist and gave the students objective feedback. Content validity, criterion validity (the correlation between DOPS scores and the mean scores of clinical and theoretical midwifery courses, and the relationship of each item with the DOPS score for each skill), construct validity (internal structure), and reliability (internal consistency and inter-rater reliability) were analyzed using SPSS software.
Result: The DOPS test content validity index and content validity ratio were above 0.75 and 0.50, respectively. DOPS scores correlated with theoretical course scores at 0.071 (p = 0.647) and with clinical course scores at 0.093 (p = 0.548). Each item correlated significantly with the total score of its skill, indicating acceptable internal validity (p
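The content validity thresholds reported above (CVI above 0.75, CVR above 0.50) are conventionally computed from expert-panel ratings. A minimal sketch of how these two indices are typically derived; the panel size and ratings below are hypothetical, not data from the study:

```python
# Lawshe-style content validity ratio (CVR) and item-level content
# validity index (I-CVI), computed from hypothetical expert ratings.

def content_validity_ratio(n_essential, n_experts):
    """CVR = (n_e - N/2) / (N/2), where n_e experts rate the item 'essential'."""
    half = n_experts / 2
    return (n_essential - half) / half

def item_cvi(relevance_ratings):
    """I-CVI: share of experts rating an item 3 or 4 on a 1-4 relevance scale."""
    relevant = sum(1 for r in relevance_ratings if r >= 3)
    return relevant / len(relevance_ratings)

# Hypothetical panel: 10 experts, 8 judge the item essential.
print(content_validity_ratio(8, 10))              # 0.6
# Hypothetical relevance ratings on a 1-4 scale for the same item.
print(item_cvi([4, 4, 3, 3, 4, 2, 3, 4, 3, 4]))  # 0.9
```

An item whose CVR or I-CVI falls below the chosen cut-off (here, 0.50 and 0.75) would typically be revised or dropped from the checklist.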
SeyyedMohammad Fereshtehnejad; Hamidreza Baradaran; Maziar Moradi Lakeh
Volume 20, Issue 5 , March and April 2014, , Pages 611-622
Abstract
Background: Nowadays, it is of utmost importance for audiences to critically appraise the research evidence presented at scientific congresses. In addition to improving the scientific and practical skills of critical appraisal, it is important to use a standard framework as the major tool for peer review. We aimed to assess the validity and reliability of a proposed checklist for the critical appraisal of original research abstracts by student peer reviewers.
Methods and Materials: This study was part of an educational intervention project carried out in a workshop setting, in which 40 medical students from the medical faculties of universities in Tehran were recruited. Participants were selected using a non-probability purposive sampling method. The workshop curriculum covered the 31-item checklist for peer reviewing abstracts, with several tips on each item, delivered through lectures, simulations and group discussions over 10 hours. The medical students used the checklist twice, at the beginning and at the end of the workshop, to score three sample abstracts. Data were collected and analyzed using Spearman correlation (internal consistency) and Cronbach's alpha to calculate the reliability of the different items and domains of the checklist, using SPSS software. Moreover, the Delphi method was applied to confirm the validity of the instrument through expert opinion.
Results: A group of experts confirmed the validity of the checklist by means of the Delphi method. Moreover, the internal consistency of the main domains of the checklist, consisting of "Introduction", "Methods", "Results" and "Conclusion", was statistically significant (P
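The internal-consistency analysis described above relies on Cronbach's alpha, alpha = (k/(k-1)) * (1 - sum of item variances / variance of totals). A minimal self-contained sketch of the computation; the item scores below are hypothetical, not data from the study:

```python
# Cronbach's alpha for internal consistency of a set of checklist items.
# item_scores: one list per item, each holding that item's score per rater.

def cronbach_alpha(item_scores):
    k = len(item_scores)               # number of items
    n = len(item_scores[0])            # number of raters

    def variance(xs):
        # Sample variance (n - 1 denominator).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_var = sum(variance(item) for item in item_scores)
    totals = [sum(item[j] for item in item_scores) for j in range(n)]
    return (k / (k - 1)) * (1 - sum_item_var / variance(totals))

# Hypothetical domain of 3 items, each scored by 5 raters.
scores = [
    [3, 4, 4, 5, 3],
    [3, 5, 4, 4, 3],
    [4, 4, 5, 5, 3],
]
print(round(cronbach_alpha(scores), 2))  # about 0.83
```

Values closer to 1 indicate that the items within a domain (e.g. "Methods") vary together across raters, which is what the abstract reports for the checklist's four main domains.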